



Successor-Predecessor Intrinsic Exploration
Changmin Yu, Neil Burgess

Neural Information Processing Systems

Exploration is essential in reinforcement learning, particularly in environments where external rewards are sparse. Here we focus on exploration with intrinsic rewards, where the agent transiently augments the external rewards with self-generated intrinsic rewards.
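The "transient" augmentation described above can be sketched as an external reward plus an intrinsic bonus whose weight decays over training, so exploration pressure fades as learning progresses. The exponential schedule and coefficients below are illustrative assumptions, not the paper's actual settings:

```python
import numpy as np

def augmented_reward(r_ext, r_int, step, beta0=0.1, decay=1e-4):
    """Combine external and intrinsic rewards.

    The intrinsic weight beta decays exponentially with the training step,
    making the augmentation transient (hypothetical schedule for illustration).
    """
    beta = beta0 * np.exp(-decay * step)
    return r_ext + beta * r_int
```

Early in training the agent optimizes a mix of both signals; as `step` grows, the objective converges back to the external reward alone.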


A Algorithms

Neural Information Processing Systems

We directly adopt the official default settings for Atari games.

B.2 Minecraft Environment Settings

Table 1 outlines how we set up and initialize the environment for each harvest task. Our method is tested in two different biomes, plains and sunflower plains, both of which offer a wide field of view. In Minecraft, the action space is an 8-dimensional multi-discrete space.
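A multi-discrete action space means each action is a vector of independent discrete choices, one per dimension. A minimal sketch of sampling from such a space follows; the per-dimension sizes are made up for illustration and are not the actual Minecraft action bindings:

```python
import numpy as np

# Hypothetical per-dimension cardinalities for an 8-dimensional
# multi-discrete action space (illustrative only).
SIZES = [3, 3, 4, 25, 25, 8, 244, 36]

def sample_action(rng):
    """Draw one action: one integer per dimension, each in [0, size)."""
    return [int(rng.integers(n)) for n in SIZES]

rng = np.random.default_rng(0)
action = sample_action(rng)  # length-8 integer vector
```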





Discovering Creative Behaviors through DUPLEX: Diverse Universal Features for Policy Exploration

Neural Information Processing Systems

The ability to approach the same problem from different angles is a cornerstone of human intelligence that leads to robust solutions and effective adaptation to problem variations. In contrast, current RL methodologies tend to lead to policies that settle on a single solution to a given problem, making them brittle to problem variations. Replicating human flexibility in reinforcement learning agents is the challenge that we explore in this work.


Left: prediction error; right: surprise. α is a hyperparameter we scanned over. We implement a new IM baseline: ICM (Pathak et al., 2017 [23]).

Neural Information Processing Systems

We thank the reviewers for their thorough feedback; based on it, we have made numerous improvements. (The original code is for discrete actions.) We ran the IM baseline with the random object; the plot is similar to "tool" in Figure 1, and we omit it due to space constraints. Reviewer #1 suggested that the environments could be solved by classic planning methods.
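The ICM baseline mentioned above derives its intrinsic reward from a forward model's prediction error in a learned feature space. A minimal sketch of that reward computation, with the encoder and forward model abstracted away as precomputed feature vectors (in the real ICM both are trained neural networks):

```python
import numpy as np

def icm_intrinsic_reward(phi_next, phi_pred):
    """ICM-style curiosity bonus: squared error between the encoded next
    state phi_next and the forward model's prediction phi_pred of it.
    Sketch only; the actual module trains the encoder jointly with
    forward and inverse models.
    """
    return 0.5 * float(np.sum((phi_next - phi_pred) ** 2))
```

States the forward model predicts poorly yield a large bonus, steering the agent toward novel transitions.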